Practical applications employing deep learning must guarantee inference quality. However, we find that the inference quality of state-of-the-art and state-of-the-practice models in practical applications follows a long-tail distribution. In the real world, many tasks, such as safety-critical and mission-critical ones, place strict requirements on the quality of deep learning inference. Fluctuating inference quality seriously hampers practical deployment, and the quality at the tail may lead to severe consequences. Models that achieve outstanding inference quality when designed and trained under loose constraints can still perform poorly under constraints of practical significance: on the one hand, neural network models must be deployed on complex systems with limited resources; on the other hand, safety-critical and mission-critical tasks must satisfy additional metric constraints while maintaining high inference quality. We coin a new term, ``tail quality,'' to characterize this essential requirement and challenge, and propose a new metric, ``X-Critical-Quality,'' to measure inference quality under specified constraints. This article reveals factors contributing to the failure of state-of-the-art and state-of-the-practice algorithms and systems in real scenarios, and calls for innovative methodologies and tools to tackle this enormous challenge.
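As a minimal sketch of what an inference-quality-under-constraint metric might look like (the function name and definition here are illustrative assumptions, not the paper's actual formulation of X-Critical-Quality), one can count a prediction only when it is both correct and meets a deployment constraint such as a latency deadline:

```python
def critical_quality(correct, latencies_ms, deadline_ms):
    """Fraction of samples that are BOTH correct and within the deadline.

    Hypothetical illustration of measuring inference quality under a
    constraint; the paper's actual metric definition may differ.
    """
    met = [c and (t <= deadline_ms) for c, t in zip(correct, latencies_ms)]
    return sum(met) / len(met)

correct = [True, True, False, True, True]
latencies_ms = [8.0, 25.0, 7.5, 9.1, 40.0]  # tail latencies hurt the score
print(critical_quality(correct, latencies_ms, deadline_ms=10.0))  # 0.4
```

Note how the two tail-latency samples (25.0 ms and 40.0 ms) drag the score from 0.8 plain accuracy down to 0.4 once the deadline constraint is enforced.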
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
Previous computation models either have equivalent ability to represent all computations but fail to provide primitive operators for programming complex algorithms, or lack the generalized expression ability to represent newly added computations. This article presents a unified computation model with generalized expression ability and a concise set of primitive operators for programming high-level algorithms. We propose a unified data abstraction -- Tensor of List -- and offer a unified computation model based on it, which we call the ToL model (in short, ToL). ToL introduces five atomic computations that can represent any elementary computation by finite composition, ensured by strict formal proof. Based on ToL, we design a pure-functional language -- ToLang -- which provides a concise set of primitive operators for programming complex big-data and AI algorithms. Our evaluations show that ToL has generalized expression ability and a built-in performance indicator, backed by a strictly defined computation metric -- elementary operation count (EOPs) -- that is consistent with FLOPs within a small error range.
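To illustrate how an elementary-operation count can track FLOPs "within a small error range," here is a hypothetical accounting for matrix multiplication; the numbers and the function names are illustrative assumptions, not ToL's formal EOP definition:

```python
def matmul_eops(m, k, n):
    """Elementary operations for an (m x k) @ (k x n) product:
    m*n*k multiplications plus m*n*(k-1) additions.
    Illustrative accounting only; ToL's formal definition may differ."""
    return m * n * k + m * n * (k - 1)

def matmul_flops(m, k, n):
    """The common 2*m*n*k FLOP estimate for the same product."""
    return 2 * m * n * k

# The two counts agree up to a small relative error:
m, k, n = 64, 128, 32
print(matmul_eops(m, k, n), matmul_flops(m, k, n))  # 522240 524288
```

For this shape the relative gap is under 0.4%, consistent with the "small error range" the abstract describes.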
This paper targets unsupervised skeleton-based action representation learning and proposes a new Hierarchical Contrast (HiCo) framework. Different from existing contrastive solutions that typically represent an input skeleton sequence as instance-level features and perform contrast holistically, our proposed HiCo represents the input as multi-level features and performs contrast in a hierarchical manner. Specifically, given a human skeleton sequence, we represent it as multiple feature vectors of different granularities from both the temporal and spatial domains via sequence-to-sequence (S2S) encoders and unified downsampling modules. The hierarchical contrast is then conducted at four levels: instance level, domain level, clip level, and part level. Moreover, HiCo is orthogonal to the S2S encoder, which allows us to flexibly adopt state-of-the-art S2S encoders. Extensive experiments on four datasets, i.e., NTU-60, NTU-120, PKU-MMD I and II, show that HiCo achieves a new state-of-the-art for unsupervised skeleton-based action representation learning on two downstream tasks, action recognition and retrieval, and that its learned action representation transfers well. Finally, we show that our framework is also effective for semi-supervised skeleton-based action recognition. Our code is available at https://github.com/HuiGuanLab/HiCo.
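A rough sketch of the hierarchical contrast idea, using a standard InfoNCE loss averaged over feature levels; the aggregation scheme and all names here are assumptions, and HiCo's actual objective and level weighting may differ:

```python
import numpy as np

def info_nce(za, zb, tau=0.1):
    """InfoNCE loss between two views; row i of za matches row i of zb."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def hierarchical_contrast(levels_a, levels_b, tau=0.1):
    """Average the contrastive loss over feature levels (e.g. instance,
    domain, clip, part), mimicking a multi-level contrast.
    Hypothetical aggregation; the paper may combine levels differently."""
    return float(np.mean([info_nce(a, b, tau) for a, b in zip(levels_a, levels_b)]))
```

Each entry of `levels_a`/`levels_b` would hold one granularity of features (e.g. part-level or clip-level vectors) produced by the S2S encoder and downsampling modules.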
Cross-Lingual Summarization (CLS) aims at generating summaries in one language for the given documents in another language. CLS has attracted wide research attention due to its practical significance in the multi-lingual world. Though great contributions have been made, existing CLS works typically focus on short documents, such as news articles, short dialogues and guides. Different from these short texts, long documents such as academic articles and business reports usually discuss complicated subjects and consist of thousands of words, making them non-trivial to process and summarize. To promote CLS research on long documents, we construct Perseus, the first long-document CLS dataset which collects about 94K Chinese scientific documents paired with English summaries. The average length of documents in Perseus is more than two thousand tokens. As a preliminary study on long-document CLS, we build and evaluate various CLS baselines, including pipeline and end-to-end methods. Experimental results on Perseus show the superiority of the end-to-end baseline, outperforming the strong pipeline models equipped with sophisticated machine translation systems. Furthermore, to provide a deeper understanding, we manually analyze the model outputs and discuss specific challenges faced by current approaches. We hope that our work could benchmark long-document CLS and benefit future studies.
This paper presents a Generative RegIon-to-Text transformer, GRiT, for object understanding. The spirit of GRiT is to formulate object understanding as <region, text> pairs, where the region locates objects and the text describes them. For example, the text in object detection denotes class names, while that in dense captioning refers to descriptive sentences. Specifically, GRiT consists of a visual encoder to extract image features, a foreground object extractor to localize objects, and a text decoder to generate open-set object descriptions. With the same model architecture, GRiT can understand objects via not only simple nouns, but also rich descriptive sentences including object attributes or actions. Experimentally, we apply GRiT to object detection and dense captioning tasks. GRiT achieves 60.4 AP on COCO 2017 test-dev for object detection and 15.5 mAP on Visual Genome for dense captioning. Code is available at https://github.com/JialianW/GRiT.
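The <region, text> formulation can be pictured as a single record type shared by both tasks; the field names below are illustrative assumptions, not GRiT's actual interface:

```python
from dataclasses import dataclass

@dataclass
class RegionText:
    """One unit of object understanding: a box plus free-form text.
    For detection the text is a class name; for dense captioning it is
    a descriptive sentence. Field names are hypothetical."""
    box: tuple  # (x1, y1, x2, y2) in pixels
    text: str

# The same record type covers both tasks:
detection = RegionText((10, 20, 110, 200), "dog")
dense_cap = RegionText((10, 20, 110, 200), "a brown dog catching a frisbee")
```

This is what lets a single text decoder serve both tasks: only the target string changes, not the output structure.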
The image captioning task is typically realized by an auto-regressive method that decodes the text tokens one by one. We present a diffusion-based captioning model, dubbed DDCap, to allow more decoding flexibility. Unlike image generation, where the output is continuous and redundant with a fixed length, texts in image captions are categorical and short, with varied lengths. Therefore, naively applying the discrete diffusion model to text decoding does not work well, as shown in our experiments. To address the performance gap, we propose several key techniques, including best-first inference, a concentrated attention mask, text length prediction, and image-free training. On COCO without additional caption pre-training, DDCap achieves a CIDEr score of 117.8, which is +5.0 higher than the auto-regressive baseline with the same architecture in the controlled setting. It also achieves a CIDEr score 26.8 points higher than the auto-regressive baseline (230.3 vs. 203.5) on a caption infilling task. With 4M vision-language pre-training images and the base-sized model, we reach a CIDEr score of 125.1 on COCO, which is competitive with the best well-developed auto-regressive frameworks. The code is available at https://github.com/buxiangzhiren/DDCap.
Learning to optimize is a rapidly growing field that aims to use machine learning (ML) to solve optimization problems or to improve existing optimization algorithms. In particular, graph neural networks (GNNs) are regarded as suitable ML models for optimization problems whose variables and constraints are permutation-invariant, such as linear programs (LPs). While the literature reports encouraging numerical results, this article establishes the theoretical foundation for applying GNNs to solving LPs. Given any size limit on LPs, we construct a GNN that maps different LPs to different outputs. We show that properly built GNNs can reliably predict the feasibility, boundedness, and an optimal solution of every LP in a broad class. Our proofs build on the recently discovered connection between the Weisfeiler-Lehman isomorphism test and GNNs. To validate our results, we train a simple GNN and present its accuracy in mapping LPs to their feasibility and solutions.
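A minimal sketch of the permutation-invariant encoding this line of work relies on: representing an LP as a variable-constraint bipartite graph that a GNN can consume. The node-feature choices below are a common convention in the learning-to-optimize literature, not necessarily the paper's exact encoding:

```python
import numpy as np

def lp_to_bipartite_graph(A, b, c):
    """Encode the LP  min c^T x  s.t.  A x <= b  as a bipartite graph:
    one node per variable (feature c_j), one node per constraint
    (feature b_i), and an edge of weight A_ij for each nonzero entry.
    Permuting rows or columns of A only relabels nodes, leaving the
    graph itself unchanged -- which is why permutation-invariant GNNs
    are a natural fit for LPs."""
    m, n = A.shape
    var_nodes = [{"id": f"x{j}", "cost": float(c[j])} for j in range(n)]
    con_nodes = [{"id": f"r{i}", "rhs": float(b[i])} for i in range(m)]
    edges = [(f"r{i}", f"x{j}", float(A[i, j]))
             for i in range(m) for j in range(n) if A[i, j] != 0]
    return var_nodes, con_nodes, edges
```

A GNN would then run message passing over `edges`, alternating between the variable side and the constraint side, to predict feasibility, boundedness, or a solution.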
Naive Bayes is widely used in many applications because of its simplicity and its ability to handle both numerical and categorical data. However, its lack of correlation modeling between features limits its performance. Moreover, noise and outliers in real-world datasets also substantially degrade classification performance. In this paper, we propose a feature augmentation method that employs a stacked autoencoder to reduce the noise in the data and enhance the discriminative power of naive Bayes. The proposed stacked autoencoder consists of two autoencoders serving different purposes. The first encoder shrinks the initial features to derive a compact feature representation that removes noise and redundant information. The second encoder boosts the discriminative power of the features by expanding them into a higher-dimensional space, so that samples of different classes can be better separated there. By integrating the proposed feature augmentation method with a regularized naive Bayes, the discrimination power of the model is greatly enhanced. The proposed method is evaluated on a set of machine-learning benchmark datasets. The experimental results show that it significantly and consistently outperforms state-of-the-art naive Bayes classifiers.
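The two-stage shape transformation described above (compress to denoise, then expand to separate classes) can be sketched as follows. The dimensions are hypothetical, and the weights are random placeholders for what the method would actually learn via reconstruction and supervised objectives:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One dense encoder layer with a tanh nonlinearity."""
    return np.tanh(x @ w + b)

# Hypothetical dimensions: 20 raw features -> 8 compact features
# (denoising stage) -> 64 expanded features (separability stage),
# mirroring the two-autoencoder pipeline described above.
d_in, d_small, d_big = 20, 8, 64
w1, b1 = rng.normal(size=(d_in, d_small)), np.zeros(d_small)
w2, b2 = rng.normal(size=(d_small, d_big)), np.zeros(d_big)

X = rng.normal(size=(5, d_in))      # 5 samples of raw features
compact = layer(X, w1, b1)           # first encoder: compress / denoise
augmented = layer(compact, w2, b2)   # second encoder: expand for separation
print(augmented.shape)               # (5, 64)
```

The `augmented` features would then be fed to the (regularized) naive Bayes classifier in place of the raw input.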
In many classification models, data is discretized to better estimate its distribution. Existing discretization methods are usually designed to maximize the discriminant power of the discretized data, while overlooking that the primary goal of data discretization in classification is to improve generalization performance. As a result, the data tends to be over-split into many small bins, since undiscretized data retains the maximum discriminant information. We therefore propose a Max-Dependency-Min-Divergence (MDMD) criterion that maximizes both the discriminant information and the generalization ability of the discretized data. More specifically, the max-dependency criterion maximizes the statistical dependency between the discretized data and the class variable, while the min-divergence criterion explicitly minimizes the JS divergence between the training data and the validation data for a given discretization scheme. The proposed MDMD criterion is technically appealing, but it is difficult to reliably estimate the high-order joint distribution of the attributes and the class variable. We therefore further propose a more practical solution, the Max-Relevance-Min-Divergence (MRMD) discretization scheme, in which each attribute is discretized separately by simultaneously maximizing the discriminant information and the generalization ability of the discretized data. The proposed MRMD is compared with state-of-the-art discretization algorithms under a naive Bayes classification framework on 45 machine-learning benchmark datasets. It significantly outperforms all the compared methods on most of the datasets.
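The min-divergence half of the criterion is concrete enough to sketch: compute the JS divergence between the training and validation bin-count distributions induced by a candidate discretization. The helper names are assumptions, not the paper's code:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two (unnormalized) histograms."""
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def generalization_gap(train_vals, valid_vals, bin_edges):
    """JS divergence between train and validation bin-count
    distributions under a candidate discretization -- the quantity the
    min-divergence part of the criterion asks to minimize.
    Hypothetical helper, not the paper's implementation."""
    p, _ = np.histogram(train_vals, bins=bin_edges)
    q, _ = np.histogram(valid_vals, bins=bin_edges)
    return js_divergence(p, q)
```

A discretization scheme whose bins over-fit the training sample will show a large gap between the two histograms, which is exactly what this term penalizes.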